BotsLab 4-Cam W510 System review: This security package doesn't deliver

PCWorld

Four 4K cameras, a base station with expandable local storage, and no subscription required. So, what's the catch? This four-camera system impresses with solid video quality and expandable local storage, but only when the cameras are in such close range that they probably won't provide full coverage of your property. Outfitting your home with outdoor security cameras can get complicated--and expensive--quickly. Anyone looking for a shortcut on both fronts might consider one of BotsLab's W510 kits: bundles consisting of up to six 4K outdoor pan/tilt security cameras, solar panels to keep each camera's battery topped off, and a base station with 32GB of onboard storage (expandable up to 16TB with a user-supplied 2.5-inch hard drive).


3 Best Floodlight Security Cameras (2026), Tested and Reviewed

WIRED

Light up and secure your driveway, backyard, or porch with a floodlight security camera. Floodlight security cameras are a great way to light up your property. Shady areas around your home can make life easier for would-be burglars, and make it harder for you to plug in the car or take out the trash. Motion-triggered lighting is an essential minimum, but with a floodlight security camera, you get that plus a video feed. Floodlight cameras are also far more configurable and reliable than standalone lights; they let you check in on your property from the office or bed, and they can alert you to intruders. While this guide covers floodlight security cameras, we also have guides to the Best Outdoor Security Cameras, Best Indoor Security Cameras, and Best Video Doorbells.


Tapo C615F Kit floodlight cam review: Lights, camera, solar!

PCWorld

Most floodlight cams need hardwired power, limiting your installation options. This battery-powered model can go anywhere, and it has a solar panel, too! Its single floodlight isn't as bright as what you get with hardwired models, but despite that and a couple of minor bugs, this low-cost, battery-powered floodlight camera knocks it out of the park in most respects. The "Kit" in TP-Link Tapo C615F Kit refers to the solar panel included with this full-featured security camera/floodlight combo to keep its battery charged.


Annke FCD800 security cam review: A single camera that sees all

PCWorld

This dual-4K-lens PoE turret camera gives you a full 180 degrees of coverage, night lighting, and AI detection in one tidy, affordable unit. The Annke FCD800 delivers sharp panoramic coverage, smart detection, and solid deterrence at a great price, making it an easy recommendation for anyone who needs to monitor a large area with a single, reliable camera on a tight budget and who has the required infrastructure in place (or plans to add it). Not long ago, panoramic security camera coverage required installing multiple units, and even then you'd end up with blind spots. Then dual-lens models came along and promised to fix that by stitching two views into one wide shot.


Verifying LLM Inference to Detect Model Weight Exfiltration

Rinberg, Roy, Karvonen, Adam, Hoover, Alexander, Reuter, Daniel, Warr, Keri

arXiv.org Artificial Intelligence

As large AI models become increasingly valuable assets, the risk of model weight exfiltration from inference servers grows accordingly. An attacker controlling an inference server may exfiltrate model weights by hiding them within ordinary model outputs, a strategy known as steganography. This work investigates how to verify model responses to defend against such attacks and, more broadly, to detect anomalous or buggy behavior during inference. We formalize model exfiltration as a security game, propose a verification framework that can provably mitigate steganographic exfiltration, and specify the trust assumptions associated with our scheme. To enable verification, we characterize valid sources of non-determinism in large language model inference and introduce two practical estimators for them. We evaluate our detection framework on several open-weight models ranging from 3B to 30B parameters. On MOE-Qwen-30B, our detector reduces exfiltratable information to <0.5% with a false-positive rate of 0.01%, corresponding to a >200x slowdown for adversaries. Overall, this work further establishes a foundation for defending against model weight exfiltration and demonstrates that strong protection can be achieved with minimal additional cost to inference providers.
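The verification idea can be sketched in a few lines: a trusted verifier recomputes the per-token log-probabilities for a served response and flags it when the deviation exceeds what valid non-determinism allows. This is a toy illustration; the tolerance, function names, and synthetic data below are assumptions for exposition, not the paper's actual estimators.

```python
import numpy as np

# Hypothetical tolerance calibrated to "valid" non-determinism
# (e.g., floating-point reduction order across GPUs).
TOLERANCE = 5e-3  # max allowed mean absolute logprob deviation

def verify_response(served_logprobs: np.ndarray,
                    recomputed_logprobs: np.ndarray) -> bool:
    """Flag a response whose served per-token logprobs drift too far
    from the verifier's trusted recomputation of the same tokens.
    A steganographic encoder must perturb outputs to smuggle bits out;
    bounding the allowed deviation bounds the leakable information."""
    deviation = np.abs(served_logprobs - recomputed_logprobs).mean()
    return deviation <= TOLERANCE  # True = accept, False = raise alarm

# Usage: a benign response matches recomputation up to numeric noise.
rng = np.random.default_rng(0)
honest = rng.normal(-2.0, 1.0, size=128)
noise = rng.normal(0.0, 1e-4, size=128)          # valid non-determinism
tampered = honest + rng.normal(0.0, 0.05, 128)   # hidden-payload drift

print(verify_response(honest + noise, honest))   # True  (accept)
print(verify_response(tampered, honest))         # False (flag)
```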


Modular Deep-Learning-Based Early Warning System for Deadly Heatwave Prediction

Xu, Shangqing, Zhao, Zhiyuan, Sharma, Megha, Martín-Olalla, José María, Rodríguez, Alexander, Wellenius, Gregory A., Prakash, B. Aditya

arXiv.org Artificial Intelligence

Severe heatwaves in urban areas significantly threaten public health, calling for early warning strategies. Although existing methods can predict the occurrence of heatwaves and attribute historical mortality, predicting an incoming deadly heatwave remains a challenge due to the difficulty of defining and estimating heat-related mortality. Furthermore, establishing an early warning system imposes additional requirements, including data availability, spatial and temporal robustness, and decision costs. To address these challenges, we propose DeepTherm, a modular early warning system for deadly heatwave prediction that does not require a history of heat-related mortality. Leveraging the flexibility of deep learning, DeepTherm employs a dual-prediction pipeline, disentangling baseline mortality (in the absence of heatwaves and other irregular events) from all-cause mortality. We evaluated DeepTherm on real-world data across Spain. Results demonstrate consistent, robust, and accurate performance across diverse regions, time periods, and population groups while allowing a trade-off between missed alarms and false alarms.
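The dual-prediction idea reduces to a simple decision rule: forecast baseline mortality assuming no heatwave, subtract it from all-cause mortality, and alarm when the excess crosses a threshold. The sketch below is a toy version with hypothetical names and synthetic counts, not DeepTherm's actual pipeline.

```python
import numpy as np

def excess_mortality_alert(observed_all_cause: np.ndarray,
                           predicted_baseline: np.ndarray,
                           threshold: float) -> np.ndarray:
    """Core of a dual-prediction early-warning rule: one model forecasts
    baseline mortality absent heatwaves; comparing it with all-cause
    mortality yields an excess signal, and an alarm fires when that
    excess crosses a threshold. Raising the threshold trades false
    alarms for missed alarms, and vice versa."""
    excess = observed_all_cause - predicted_baseline
    return excess > threshold  # boolean alarm per day/region

# Toy usage with synthetic daily counts (not real data).
baseline = np.array([100., 102., 98., 101., 99.])
observed = np.array([101., 103., 97., 135., 140.])  # spike on days 4-5
print(excess_mortality_alert(observed, baseline, threshold=20.0))
# -> [False False False  True  True]
```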


CryptoTensors: A Light-Weight Large Language Model File Format for Highly-Secure Model Distribution

Zhu, Huifeng, Li, Shijie, Li, Qinfeng, Jin, Yier

arXiv.org Artificial Intelligence

To enhance the performance of large language models (LLMs) in various domain-specific applications, sensitive data such as healthcare, law, and finance records are being used to privately customize or fine-tune these models. Such privately adapted LLMs are regarded as either personal privacy assets or corporate intellectual property. Therefore, protecting model weights and maintaining strict confidentiality during deployment and distribution have become critically important. However, existing model formats and deployment frameworks provide little to no built-in support for confidentiality, access control, or secure integration with trusted hardware. Current methods for securing model deployment either rely on computationally expensive cryptographic techniques or tightly controlled private infrastructure. Although these approaches can be effective in specific scenarios, they are difficult and costly to deploy widely. In this paper, we introduce CryptoTensors, a secure and format-compatible file structure for confidential LLM distribution. Built as an extension to the widely adopted Safetensors format, CryptoTensors incorporates tensor-level encryption and embedded access control policies, while preserving critical features such as lazy loading and partial deserialization. It enables transparent decryption and automated key management, supporting flexible licensing and secure model execution with minimal overhead. We implement a proof-of-concept library, benchmark its performance across serialization and runtime scenarios, and validate its compatibility with existing inference frameworks, including Hugging Face Transformers and vLLM. Our results highlight CryptoTensors as a light-weight, efficient, and developer-friendly solution for safeguarding LLM weights in real-world and widespread deployments.
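Tensor-level encryption with a plaintext index is what preserves lazy loading: each tensor is sealed independently, so a reader can seek to and decrypt just the tensors it needs. The Python sketch below illustrates the idea with AES-GCM from the cryptography package; the header fields and helper names are hypothetical stand-ins, not the actual CryptoTensors format.

```python
import json, os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_tensor(name: str, raw: bytes, key: bytes) -> tuple[dict, bytes]:
    """Encrypt one tensor's byte buffer with AES-GCM, binding the tensor
    name as associated data so ciphertexts can't be swapped between
    tensors. Returns a plaintext header entry plus the ciphertext."""
    nonce = os.urandom(12)
    ct = AESGCM(key).encrypt(nonce, raw, name.encode())
    return {"name": name, "nonce": nonce.hex(), "length": len(ct)}, ct

def decrypt_tensor(entry: dict, ct: bytes, key: bytes) -> bytes:
    """Decrypt a single tensor on demand -- per-tensor granularity is
    what would preserve lazy loading / partial deserialization."""
    return AESGCM(key).decrypt(bytes.fromhex(entry["nonce"]), ct,
                               entry["name"].encode())

# Usage: encrypt two "tensors", then lazily decrypt only one of them.
key = AESGCM.generate_key(bit_length=256)
header, blobs = [], {}
for name, raw in [("embed.weight", b"\x00" * 32), ("lm_head", b"\x01" * 16)]:
    entry, ct = encrypt_tensor(name, raw, key)
    header.append(entry)
    blobs[name] = ct
print(json.dumps(header))                                # plaintext index
print(decrypt_tensor(header[1], blobs["lm_head"], key))  # only this tensor
```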


AI video descriptions are coming to Blink security cameras

PCWorld

Following in the footsteps of Ring and Google's Nest cameras, Blink will soon show subscribers AI-generated summaries of individual video events. AI is already crafting natural-language summaries of what Amazon's Ring and Google's Nest cameras are seeing, and now AI-generated descriptions are coming to Blink cameras, too. Slated to begin rolling out today in beta to U.S. users, Blink Video Descriptions will employ AI to analyze the video events captured by Blink security cameras and then generate descriptions of what's happening. The feature, which will work with all existing Blink cameras and doorbells, will start off as a free preview for "select" Blink Basic and Plus subscribers, according to a Blink spokesperson.


Enabling Ethical AI: A case study in using Ontological Context for Justified Agentic AI Decisions

McGee, Liam, Harvey, James, Cull, Lucy, Vermeulen, Andreas, Visscher, Bart-Floris, Sharan, Malvika

arXiv.org Artificial Intelligence

Agentic AI systems, software agents with autonomy, decision-making ability, and adaptability, are increasingly used to execute complex tasks on behalf of organisations. Most such systems rely on Large Language Models (LLMs), whose broad semantic capabilities enable powerful language processing but lack explicit, institution-specific grounding. In enterprises, data rarely comes with an inspectable semantic layer, and constructing one typically requires labour-intensive "data archaeology": cleaning, modelling, and curating knowledge into ontologies, taxonomies, and other formal structures. At the same time, explainability methods such as saliency maps expose an "interpretability gap": they highlight what the model attends to but not why, leaving decision processes opaque. In this preprint, we present a case study developed by Kaiasm and Avantra AI through their work with The Turing Way Practitioners Hub, a forum established under the InnovateUK BridgeAI program. The study presents a collaborative human-AI approach to building an inspectable semantic layer for Agentic AI: AI agents first propose candidate knowledge structures from diverse data sources; domain experts then validate, correct, and extend these structures, with their feedback used to improve subsequent models. The authors show how this process captures tacit institutional knowledge, improves response quality and efficiency, and mitigates institutional amnesia. We argue for a shift from post-hoc explanation to justifiable Agentic AI, where decisions are grounded in explicit, inspectable evidence and reasoning accessible to both experts and non-specialists.
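A minimal sketch of that propose-then-validate loop, assuming hypothetical interfaces rather than the tooling actually used in the case study:

```python
from typing import Callable

Triple = tuple[str, str, str]  # (subject, relation, object)

def curate_ontology(propose: Callable[[], list[Triple]],
                    expert_review: Callable[[Triple], bool]) -> list[Triple]:
    """AI agents propose candidate knowledge structures; domain experts
    validate or reject them, and accepted triples accumulate into an
    inspectable semantic layer that later agent decisions can cite."""
    accepted: list[Triple] = []
    for triple in propose():
        if expert_review(triple):      # human validation step
            accepted.append(triple)
    return accepted

# Toy usage: the "expert" rejects a triple naming an unknown department.
proposals = [("Invoice", "approved_by", "Finance"),
             ("Invoice", "approved_by", "Atlantis")]
reviewed = curate_ontology(lambda: proposals,
                           lambda t: t[2] != "Atlantis")
print(reviewed)  # [('Invoice', 'approved_by', 'Finance')]
```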


AGENTSAFE: A Unified Framework for Ethical Assurance and Governance in Agentic AI

Khan, Rafflesia, Joyce, Declan, Habiba, Mansura

arXiv.org Artificial Intelligence

The rapid deployment of large language model (LLM)-based agents introduces a new class of risks, driven by their capacity for autonomous planning, multi-step tool integration, and emergent interactions. Existing governance approaches remain fragmented in the face of these risks: frameworks are either driven by static taxonomies or lack an integrated end-to-end pipeline from risk identification to operational assurance, especially for agentic platforms. We propose AGENTSAFE, a practical governance framework for LLM-based agentic systems. The framework operationalises the AI Risk Repository into design, runtime, and audit controls for risk identification and assurance. AGENTSAFE profiles agentic loops (plan -> act -> observe -> reflect) and toolchains, and maps risks onto structured taxonomies extended with agent-specific vulnerabilities. It introduces safeguards that constrain risky behaviours, escalates high-impact actions to human oversight, and evaluates systems through pre-deployment scenario banks spanning security, privacy, fairness, and systemic safety. During deployment, AGENTSAFE ensures continuous governance through semantic telemetry, dynamic authorization, anomaly detection, and interruptibility mechanisms. Provenance and accountability are reinforced through cryptographic tracing and organizational controls, enabling measurable, auditable assurance across the lifecycle of agentic AI systems. The key contributions of this paper are: (1) a unified governance framework that translates risk taxonomies into actionable design, runtime, and audit controls; (2) an Agent Safety Evaluation methodology that provides measurable pre-deployment assurance; and (3) a set of runtime governance and accountability mechanisms that institutionalise trust in agentic AI ecosystems.
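The runtime safeguard at the heart of such a framework can be illustrated as a small gatekeeper around the agentic loop; the types and policy below are hypothetical stand-ins, not AGENTSAFE's actual controls.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Action:
    tool: str
    impact: str  # "low" | "high" -- risk profile from a taxonomy mapping

def run_agent_loop(plan: Callable[[], list[Action]],
                   act: Callable[[Action], str],
                   human_approves: Callable[[Action], bool]) -> list[str]:
    """Illustrative plan -> act -> observe -> reflect loop with a
    runtime safeguard: low-impact actions execute directly, while
    high-impact ones are escalated to a human before execution."""
    observations = []
    for action in plan():                                  # plan
        if action.impact == "high" and not human_approves(action):
            observations.append(f"BLOCKED {action.tool}")  # interruptibility
            continue
        observations.append(act(action))                   # act + observe
    return observations                                    # reflect on telemetry

# Toy usage with stand-in tools and an overseer that denies everything.
plan = lambda: [Action("search_docs", "low"), Action("wire_transfer", "high")]
act = lambda a: f"ran {a.tool}"
print(run_agent_loop(plan, act, human_approves=lambda a: False))
# -> ['ran search_docs', 'BLOCKED wire_transfer']
```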